We propose a new approach to multimodal sensor fusion for Ambient Assisted Living (AAL) which takes advantage of learning using privileged information (LUPI). We address two major shortcomings of standard multimodal approaches: limited area coverage and reduced reliability. Our new framework fuses the concept of modality hallucination with triplet learning to train a model with different modalities which then copes with missing sensors at inference time. We evaluate the proposed model on inertial data from wearable accelerometer devices, using RGB videos and skeletons as privileged modalities, and show an average accuracy improvement of 6.6% on the UTD-MHAD dataset and 5.5% on the Berkeley MHAD dataset, reaching a new state of the art for single-modality classification accuracy on these datasets. We validate our framework through several ablation studies.
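To make the idea concrete, here is a minimal sketch (not the authors' architecture; the module layout, dimensions, and teacher-student setup are illustrative assumptions): an inertial encoder is trained with a standard triplet margin loss so that its embedding of an accelerometer clip stays close to a privileged-modality embedding (e.g., skeleton) of the same action and far from an embedding of a different action.

```python
import torch
import torch.nn as nn

# Illustrative encoders; the real models would be task-specific networks.
inertial_encoder = nn.Sequential(nn.Flatten(), nn.Linear(6 * 100, 128), nn.ReLU(), nn.Linear(128, 64))
privileged_encoder = nn.Sequential(nn.Flatten(), nn.Linear(75 * 100, 128), nn.ReLU(), nn.Linear(128, 64))

triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
optimizer = torch.optim.Adam(inertial_encoder.parameters(), lr=1e-3)

# Dummy batch: accelerometer clips (anchor), skeleton clips of the same
# action (positive) and of a different action (negative).
accel = torch.randn(8, 100, 6)        # 100 timesteps, 6 inertial channels
skel_same = torch.randn(8, 100, 75)   # 25 joints x 3 coordinates
skel_other = torch.randn(8, 100, 75)

anchor = inertial_encoder(accel)
with torch.no_grad():                 # the privileged branch acts as a fixed teacher here
    positive = privileged_encoder(skel_same)
    negative = privileged_encoder(skel_other)

optimizer.zero_grad()
loss = triplet_loss(anchor, positive, negative)
loss.backward()
optimizer.step()
```

Because only `inertial_encoder` is used at test time in this sketch, missing cameras or skeleton trackers do not break the deployed pipeline, which is the point the abstract makes about handling missing sensors at inference.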
In this paper, we evaluate state-of-the-art OCR methods on egocentric data. We annotate text in EPIC-KITCHENS images, and demonstrate that existing OCR methods struggle with rotated text, which is frequently observed on objects being handled. We introduce a simple rotate-and-merge procedure which can be applied to pre-trained OCR models and which halves the normalised edit distance error. This suggests that future OCR efforts should incorporate rotation into model design and training procedures.
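As a rough illustration of a rotate-and-merge wrapper around a pre-trained OCR model (the `run_ocr` stub and the confidence-based merge rule below are assumptions, not the paper's exact procedure):

```python
from PIL import Image

def run_ocr(image):
    """Stub for any pre-trained OCR model: should return a list of
    (text, confidence) detections for a PIL image."""
    raise NotImplementedError

def rotate_and_merge(image, angles=(0, 90, 180, 270)):
    """Run OCR on several rotations of the image and merge the detections,
    keeping the highest-confidence reading of each string."""
    merged = {}
    for angle in angles:
        rotated = image.rotate(angle, expand=True)
        for text, conf in run_ocr(rotated):
            if text not in merged or conf > merged[text]:
                merged[text] = conf
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

# usage: rotate_and_merge(Image.open("frame.jpg"))
```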
We propose a Temporal Voting Network (TVNet) for action localisation in untrimmed videos. This incorporates a novel Voting Evidence Module to locate temporal boundaries more accurately, where temporal contextual evidence is accumulated to predict frame-level probabilities of start and end action boundaries. Our action-independent evidence module is incorporated within a pipeline to calculate confidence scores and action classes. We achieve an average mAP of 34.6% on ActivityNet-1.3, in particular outperforming previous methods at the high IoU threshold of 0.95. On THUMOS14, TVNet combined with PGCN reaches 59.1% mAP at 0.5 IoU and outperforms prior work at all thresholds. Our code is available at https://github.com/hanielwang/tvnet.
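The voting idea can be sketched as follows: each frame casts evidence over a local temporal neighbourhood for where a start (or end) boundary lies, and the votes are accumulated into frame-level boundary probabilities. This is a schematic illustration only; the shape of the votes and the normalisation are assumptions, not the learned Voting Evidence Module.

```python
import numpy as np

def accumulate_boundary_votes(relative_votes):
    """relative_votes: (T, 2R+1) array where relative_votes[t, r] is frame t's
    evidence that a boundary lies at frame t + (r - R). Votes from all frames
    are accumulated into a per-frame boundary probability curve."""
    T, W = relative_votes.shape
    R = (W - 1) // 2
    evidence = np.zeros(T)
    for t in range(T):
        for r in range(W):
            target = t + (r - R)
            if 0 <= target < T:
                evidence[target] += relative_votes[t, r]
    return evidence / max(evidence.sum(), 1e-8)  # normalise to probabilities

# toy example: 50 frames, each voting over a +/- 5 frame neighbourhood
start_probs = accumulate_boundary_votes(np.random.rand(50, 11))
end_probs = accumulate_boundary_votes(np.random.rand(50, 11))
```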
First-person vision is gaining interest as it offers a unique viewpoint on people's interaction with objects, their attention, and even intention. However, progress in this challenging domain has been relatively slow due to the lack of sufficiently large datasets. In this paper, we introduce EPIC-KITCHENS, a large-scale egocentric video benchmark recorded by 32 participants in their native kitchen environments. Our videos depict non-scripted daily activities: we simply asked each participant to start recording every time they entered their kitchen. Recording took place in 4 cities (in North America and Europe) by participants belonging to 10 different nationalities, resulting in highly diverse cooking styles. Our dataset features 55 hours of video consisting of 11.5M frames, which we densely labelled for a total of 39.6K action segments and 454.3K object bounding boxes. Our annotation is unique in that we had the participants narrate their own videos (after recording), thus reflecting true intention, and we crowd-sourced ground-truths based on these. We describe our object, action and anticipation challenges, and evaluate several baselines over two test splits, seen and unseen kitchens.
This dissertation reports some first steps towards a compositional account of active inference and the Bayesian brain. Specifically, we use the tools of contemporary applied category theory to supply functorial semantics for approximate inference. To do so, we define on the `syntactic' side the new notion of Bayesian lens and show that Bayesian updating composes according to the compositional lens pattern. Using Bayesian lenses, and inspired by compositional game theory, we define categories of statistical games and use them to classify various problems of statistical inference. On the `semantic' side, we present a new formalization of general open dynamical systems (particularly: deterministic, stochastic, and random; and discrete- and continuous-time) as certain coalgebras of polynomial functors, which we show collect into monoidal opindexed categories (or, alternatively, into algebras for multicategories of generalized polynomial functors). We use these opindexed categories to define monoidal bicategories of cilia: dynamical systems which control lenses, and which supply the target for our functorial semantics. Accordingly, we construct functors which explain the bidirectional compositional structure of predictive coding neural circuits under the free energy principle, thereby giving a formal mathematical underpinning to the bidirectionality observed in the cortex. Along the way, we explain how to compose rate-coded neural circuits using an algebra for a multicategory of linear circuit diagrams, showing subsequently that this is subsumed by lenses and polynomial functors. Because category theory is unfamiliar to many computational neuroscientists and cognitive scientists, we have made a particular effort to give clear, detailed, and approachable expositions of all the category-theoretic structures and results of which we make use.
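For orientation, the plain (deterministic) lens pattern referenced above can be stated as follows; a Bayesian lens replaces the forward map by a stochastic channel and the backward map by a Bayesian inversion.

```latex
% A lens (v, u) from (X, S) to (Y, R) is a pair of maps
\[
  v : X \to Y \quad\text{(forward)}, \qquad u : X \times R \to S \quad\text{(backward)},
\]
% and two lenses compose by threading the forward pass into the backward pass:
\[
  (v', u') \circ (v, u) \;=\; \bigl( v' \circ v,\;\; (x, q) \mapsto u\bigl(x,\, u'(v(x),\, q)\bigr) \bigr).
\]
```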
Transformers have proved to be very effective for visual recognition tasks. In particular, vision transformers construct compressed global representations through self-attention and learnable class tokens. Multi-resolution transformers have shown recent successes in semantic segmentation but can only capture local interactions in high-resolution feature maps. This paper extends the notion of global tokens to build GLobal Attention Multi-resolution (GLAM) transformers. GLAM is a generic module that can be integrated into most existing transformer backbones. GLAM includes learnable global tokens, which unlike previous methods can model interactions between all image regions, and extracts powerful representations during training. Extensive experiments show that GLAM-Swin or GLAM-Swin-UNet exhibit substantially better performances than their vanilla counterparts on ADE20K and Cityscapes. Moreover, GLAM can be used to segment large 3D medical images, and GLAM-nnFormer achieves new state-of-the-art performance on the BCV dataset.
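A generic sketch of the learnable-global-token idea (dimensions, token counts, and module layout are assumptions, not the GLAM implementation): global tokens are concatenated to the patch tokens before self-attention, so every region can exchange information with tokens that attend over the whole image.

```python
import torch
import torch.nn as nn

class GlobalTokenAttention(nn.Module):
    """Toy illustration: learnable global tokens are prepended to the patch
    tokens before self-attention, letting every patch interact with tokens
    that see all image regions."""
    def __init__(self, dim=96, num_global_tokens=8, num_heads=4):
        super().__init__()
        self.global_tokens = nn.Parameter(torch.randn(1, num_global_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens):                 # (B, N, dim)
        b = patch_tokens.size(0)
        g = self.global_tokens.expand(b, -1, -1)
        x = torch.cat([g, patch_tokens], dim=1)      # (B, G + N, dim)
        x = x + self.attn(self.norm(x), self.norm(x), self.norm(x))[0]
        return x[:, g.size(1):]                      # return updated patch tokens

tokens = torch.randn(2, 56 * 56, 96)                 # e.g. one stage's feature-map tokens
out = GlobalTokenAttention()(tokens)
```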
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants: what we call "shared intelligence". This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world (also known as self-evidencing). Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing, leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first, and key, step towards such an ecology.
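The self-evidencing claim refers to the standard variational bound used throughout the active inference literature: minimizing variational free energy maximizes a lower bound on (log) model evidence. In the usual notation (not specific to this white paper):

```latex
\[
  F[q] \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o, s)\bigr]
       \;=\; \underbrace{D_{\mathrm{KL}}\!\bigl[q(s)\,\|\,p(s \mid o)\bigr]}_{\ge 0} \;-\; \ln p(o)
  \quad\Longrightarrow\quad \ln p(o) \;\ge\; -F[q].
\]
```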
The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Assigning qualified, unbiased and interested reviewers to paper submissions is vital for maintaining the integrity and quality of the academic publishing system and providing valuable reviews to authors. However, matching thousands of submissions with thousands of potential reviewers within a limited time is a daunting challenge for a conference program committee. Prior efforts based on topic modeling have suffered from losing the specific context that helps define the topics in a publication or submission abstract. Moreover, in some cases, the topics identified are difficult to interpret. We propose an approach that learns from each abstract published by a potential reviewer the topics studied and the explicit context in which the reviewer studied the topics. Furthermore, we contribute a new dataset for evaluating reviewer matching systems. Our experiments show a significant, consistent improvement in precision when compared with the existing methods. We also use examples to demonstrate why our recommendations are more explainable. The new approach has been deployed successfully at top-tier conferences in the last two years.
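The abstract does not spell out implementation details; purely as a point of reference, a generic abstract-similarity baseline for reviewer assignment (explicitly not the method proposed here) might look like the following, with TF-IDF standing in for the learned topic-and-context representation:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reviewers(submission_abstract, reviewer_abstracts):
    """reviewer_abstracts: dict mapping reviewer name -> list of their abstracts.
    Each reviewer is scored by the best match among their own abstracts."""
    names, corpus = [], []
    for name, abstracts in reviewer_abstracts.items():
        for text in abstracts:
            names.append(name)
            corpus.append(text)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus + [submission_abstract])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    scores = {}
    for name, sim in zip(names, sims):
        scores[name] = max(scores.get(name, 0.0), float(sim))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```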
The increased use of text data in social science research has benefited from easily accessible data (e.g., Twitter). That trend comes at the cost of research that requires sensitive but hard-to-share data (e.g., interview data, police reports, electronic health records). We present a solution to this deadlock in the form of the open-source text anonymisation software _Textwash_. This paper presents an empirical evaluation of the tool using the TILD criteria: a technical evaluation (how accurate is the tool?), an information loss evaluation (how much information is lost in the anonymisation process?) and a de-anonymisation test (can humans identify individuals from anonymised text data?). The findings suggest that Textwash performs on par with state-of-the-art entity recognition models and introduces a negligible information loss of 0.84%. For the de-anonymisation test, we tasked humans with identifying individuals from a crowdsourced dataset of descriptions of very famous, semi-famous and non-existing persons. The de-anonymisation rate ranged from 1.01% to 2.01% for realistic use cases of the tool. We replicated the findings in a second study and conclude that Textwash succeeds in removing potentially sensitive information, rendering the person descriptions practically anonymous.
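Textwash's internals are not described here; for illustration only, a generic entity-recognition-based anonymiser of the kind the tool is compared against could be sketched as follows (the spaCy model choice and label set are assumptions):

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def anonymise(text, sensitive_labels=("PERSON", "GPE", "ORG", "DATE")):
    """Replace detected entities of sensitive types with category placeholders."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in sensitive_labels:
            out.append(text[last:ent.start_char])
            out.append(f"[{ent.label_}]")   # e.g. "Alice Jones" -> "[PERSON]"
            last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(anonymise("Alice Jones met Dr. Smith in Bristol on 3 May 2021."))
```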